9 research outputs found

    Coexisting scheduling policies boosting I/O Virtual Machines

    Get PDF
    Abstract. Deploying multiple Virtual Machines (VMs) running various types of workloads on current many-core cloud computing infrastructures raises an important issue: the Virtual Machine Monitor (VMM) has to efficiently multiplex VM accesses to the hardware. We argue that altering the scheduling concept can optimize the system's overall performance. Currently, the Xen VMM achieves near-native performance when multiplexing VMs with homogeneous workloads; yet when a mixture of VMs with different types of workloads runs concurrently, I/O performance suffers. Taking into account the complexity of designing and implementing a universal scheduler, let alone the probability of such an effort being fruitless, we focus on a system with multiple scheduling policies that coexist and service VMs according to their workload characteristics. Thus, VMs can benefit from various schedulers, either existing or new, each optimal for a specific case. In this paper, we design a framework that provides three basic coexisting scheduling policies and implement it in the Xen paravirtualized environment. Evaluating our prototype, we observe 2.3 times faster I/O service and link saturation, while the CPU-intensive VMs retain more than 80% of their original performance.
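    The paper's scheduler code is not reproduced in this listing; as a rough illustration of the idea of coexisting policies, the following self-contained C sketch tags each VM with a workload class and services each class from its own run queue, so that I/O-bound and CPU-bound VMs never compete inside the same policy. All names (wl_class, vm, pick_next) are hypothetical and do not correspond to Xen's actual scheduler interface.

/* Illustrative sketch only: coexisting scheduling policies keyed by VM
 * workload class. Names (vm, wl_class, pick_next) are hypothetical and
 * do not correspond to Xen's real scheduler API. */
#include <stdio.h>

typedef enum { WL_CPU, WL_IO, WL_MIXED, WL_CLASSES } wl_class;

typedef struct vm {
    int id;
    wl_class cls;
    struct vm *next;          /* run-queue link */
} vm;

/* One run queue per coexisting policy. */
static vm *pool[WL_CLASSES];

static void enqueue(vm *v) {
    v->next = pool[v->cls];
    pool[v->cls] = v;
}

/* Each pool may apply its own policy; here every pool simply
 * round-robins, but an I/O pool could e.g. boost VMs with pending events. */
static vm *pick_next(wl_class cls) {
    vm *v = pool[cls];
    if (v)
        pool[cls] = v->next;
    return v;
}

int main(void) {
    vm vms[4] = {
        {1, WL_CPU, 0}, {2, WL_IO, 0}, {3, WL_IO, 0}, {4, WL_MIXED, 0}
    };
    for (int i = 0; i < 4; i++)
        enqueue(&vms[i]);

    /* Service I/O-bound VMs from their own queue, independently of the
     * CPU-bound queue, so neither class starves the other. */
    for (vm *v; (v = pick_next(WL_IO)) != NULL; )
        printf("I/O pool schedules VM %d\n", v->id);
    printf("CPU pool schedules VM %d\n", pick_next(WL_CPU)->id);
    return 0;
}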

    The 6G Architecture Landscape: European Perspective

    Get PDF

    Communication Architectures for Clusters, Systems Software, I/O Virtualization, Scalable

    No full text
    Thesis title: Design and implementation of a mechanism transferring data from storage devices to Myrinet networks bypassing the memory hierarchy

    MyriXen: Message Passing in Xen VMs

    No full text
    • Cloud Computing is a significant trend, but is mainly used for consolidating service-oriented environments.
    • Bridging the gap between virtualization techniques and high-performance network I/O.
    • HPC interconnects provide abstractions that can be exploited in VM execution environments but lack architectural support.
    Xen Architecture. Xen PV network I/O is based on a split driver model (netfront / netback):
    • it exports a generic Ethernet API to kernelspace;
    • the driver domain has direct access to the hardware and hosts the hardware-specific driver, the protocol-interface driver and the netback driver;
    • the netfront driver attaches to the netback driver, which in turn attaches to a dummy network interface bridged in software with the physical network interface (a minimal shared-ring sketch follows below).
    [Figure: Xen PV network architecture. The privileged guest hosts the Ethernet device driver, a software bridge and the netback driver, which communicates with each guest VM's netfront driver over the netif API, event channels and the xenbus API.]
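    The netfront/netback pair exchanges request descriptors over shared-memory rings and signals via event channels. The C snippet below is a minimal, single-process analogue of such a producer/consumer ring, intended only to illustrate the split-driver hand-off; it does not use Xen's actual netif, grant-table or xenbus interfaces.

/* Minimal analogue of the netfront/netback shared ring: a fixed-size
 * circular buffer of request descriptors. Illustrative only; real Xen
 * rings live in granted shared pages and are paired with event-channel
 * notifications between guest and driver domain. */
#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 8              /* must be a power of two */

struct netif_req {               /* hypothetical request descriptor */
    uint32_t id;
    uint32_t len;                /* payload length in bytes */
};

struct ring {
    struct netif_req slots[RING_SIZE];
    unsigned prod;               /* advanced by the "netfront" (guest) */
    unsigned cons;               /* advanced by the "netback" (driver domain) */
};

/* Guest side: post a transmit request if there is room. */
static int ring_put(struct ring *r, struct netif_req req) {
    if (r->prod - r->cons == RING_SIZE)
        return -1;               /* ring full */
    r->slots[r->prod % RING_SIZE] = req;
    r->prod++;                   /* would be followed by an event-channel kick */
    return 0;
}

/* Driver-domain side: consume one request, e.g. hand it to the bridge. */
static int ring_get(struct ring *r, struct netif_req *out) {
    if (r->cons == r->prod)
        return -1;               /* ring empty */
    *out = r->slots[r->cons % RING_SIZE];
    r->cons++;
    return 0;
}

int main(void) {
    struct ring r = {0};
    for (uint32_t i = 0; i < 3; i++)
        ring_put(&r, (struct netif_req){ .id = i, .len = 1500 });

    struct netif_req req;
    while (ring_get(&r, &req) == 0)
        printf("netback would forward request %u (%u bytes)\n", req.id, req.len);
    return 0;
}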

    Exploring I/O Virtualization Data Paths for MPI Applications in a Cluster of VMs: A Networking Perspective

    No full text
    Abstract. Nowadays, seeking optimized data paths that can increase I/O throughput in virtualized environments is an intriguing task, especially in a high-performance computing context. This study endeavors to address this issue by evaluating methods for optimized network device access using scientific applications and micro-benchmarks. We examine the network performance bottlenecks that appear in a cluster of Xen VMs using both generic and intelligent network adapters. We study the network behavior of MPI applications. Our goal is to: (a) explore the implications of alternative data paths between applications and network hardware and (b) specify optimized solutions for scientific applications that put pressure on network devices. To monitor the network load and the applications' total throughput we build a custom testbed using different network configurations. We use the Xen bridge mechanism and I/O virtualization techniques and examine the trade-offs. Specifically, using both generic and intelligent 10GbE network adapters we experiment with assigning network Virtual/Physical Functions to VMs and evaluate the performance of a real scientific application using several networking configurations (multiplexing at the hypervisor level vs. the firmware level via IOV techniques). Preliminary results show that a combination of these techniques is essential to overcome network virtualization overheads and achieve near-native performance.

    Efficient I/O device sharing in Virtual Environments

    No full text
    122 pp. Cloud computing infrastructures offer vast computing power and host a wide range of applications, from service-oriented workloads to demanding scientific simulations. High-performance computing (HPC) applications typically run in a distributed fashion across a large number of nodes, so inter-node communication plays a major role in overall performance. Running them in virtualized environments requires eliminating the intermediate virtualization layers that impose significant overhead both on communication throughput and on the response time of message exchange between nodes. At the same time, the advantages of the virtualized environment must be preserved: flexibility of execution, an isolated environment, and ease of infrastructure management. Current methods for I/O in virtualized environments either exhibit reduced communication performance or require specialized hardware that complicates the infrastructure and significantly reduces the flexibility of managing it. This thesis presents an extensive study of I/O methods in virtualized environments with an emphasis on network communication. We first describe the basic principles of modern high-performance interconnects. We then analyze the operating-system layers that take part in message exchange and describe in detail the options for I/O on virtualization platforms. Building on the related literature, which mainly proposes software solutions, we design and implement Xen2MX: an interconnection framework designed to provide high-performance communication in virtualized environments. Xen2MX is fully compatible with Myrinet/MX without requiring specialized network adapters. It combines the memory-sharing features of the Xen virtualization platform with zero-copy communication techniques to give virtual machines direct access to the network with the lowest possible response time.
    Cloud computing infrastructures provide vast processing power and host a diverse set of computing workloads, ranging from service-oriented deployments to HPC applications. As HPC applications scale to a large number of VMs, providing near-native network I/O performance to each peer VM is an important challenge. To deploy communication-intensive applications in the cloud, we have to fully exploit the underlying hardware, while at the same time retaining the benefits of virtualization: consolidation, flexibility, isolation, and ease of management. Current approaches present either limited performance or require specialized hardware that increases the complexity of the setup. In this work, we present Xen2MX, a paravirtual interconnection framework, binary compatible with Myrinet/MX and wire compatible with MXoE. Its design is based on the Open-MX protocol, a port of Myrinet/MX over generic Ethernet adapters. Xen2MX combines the zero-copy characteristics of Open-MX with Xen's memory-sharing techniques; the objective is to construct the most efficient data path for high-performance communication in virtualized environments that can be achieved with software techniques. Experimental evaluation of our prototype implementation shows that Xen2MX achieves nearly the same raw performance as Open-MX running in a non-virtualized environment. On the latency front, Xen2MX reduces the RTT latency to less than 60% of the generic paravirtual setup with a software bridge and comes within 96% of the directly attached case (IOV). Regarding throughput, Xen2MX nearly saturates a 10Gbps link, achieving 1159 MB/s, compared to 1192 MB/s for the directly attached case. Xen2MX scales efficiently with the number of VMs, saturating the link even for smaller messages when 40 single-core VMs put pressure on the network adapters.
    Αναστάσιος Α. Νάνο
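    Xen2MX's central idea, pairing Xen's memory sharing with Open-MX's zero-copy path, can be pictured by contrasting a copy-based data path with one that merely hands over a reference to the same buffer. The C sketch below is a generic, self-contained illustration of that contrast; it does not use Xen grant tables or any Open-MX/Xen2MX code, and all names in it are hypothetical.

/* Contrast between a copy-based data path (as in the default bridged
 * setup, where payloads are copied between guest and driver domain)
 * and a zero-copy hand-off of the same buffer. Generic illustration
 * only; the real system shares guest pages with the driver domain via
 * Xen's memory-sharing mechanisms rather than within one process. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_SIZE (64 * 1024)

/* Copy path: the "backend" gets its own duplicate of the payload. */
static char *send_with_copy(const char *src, size_t len) {
    char *staging = malloc(len);
    if (staging)
        memcpy(staging, src, len);   /* extra traversal of the payload */
    return staging;
}

/* Zero-copy path: only a descriptor (pointer + length) crosses the
 * boundary; the "backend" reads the original buffer in place. */
struct descriptor { const char *data; size_t len; };

static struct descriptor send_zero_copy(const char *src, size_t len) {
    return (struct descriptor){ .data = src, .len = len };
}

int main(void) {
    char *payload = malloc(MSG_SIZE);
    if (!payload)
        return 1;
    memset(payload, 'x', MSG_SIZE);

    char *copied = send_with_copy(payload, MSG_SIZE);
    struct descriptor d = send_zero_copy(payload, MSG_SIZE);

    printf("copy path: backend buffer at %p (duplicated %d bytes)\n",
           (void *)copied, MSG_SIZE);
    printf("zero-copy path: backend reads original buffer at %p\n",
           (void *)d.data);

    free(copied);
    free(payload);
    return 0;
}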

    Deploying MPI applications in a cluster of Xen VMs: A Networking Perspective

    No full text
    Nowadays, seeking optimized data paths that can increase I/O throughput in virtualized environments is an intriguing task, especially in a high-performance computing context. We try to address this issue by evaluating methods for optimized network device access using scientific applications and micro-benchmarks. We examine the network performance bottlenecks that appear in a cluster of Xen VMs using both generic and intelligent network adapters. We study the network behavior of MPI applications. Our goal is to: (a) explore the implications of alternative data paths (direct or indirect) between applications and network hardware and (b) specify optimized solutions for scientific applications that put pressure on network devices. Preliminary results show that a combination of these techniques is essential for scientific applications to achieve near-native performance in VM environments.
    Network Configuration (BRIDGED) [Figure 1: the BRIDGED configuration] This is the default configuration provided by Xen. All guest VMs share a common bridge, set up by the privileged guest. Data flows from applications to the privileged guest via copying or page flipping, so the software bridge becomes a bottleneck in an HPC context; a minimal MPI latency micro-benchmark of the kind used to quantify this is sketched below.
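    The abstracts above evaluate MPI applications and micro-benchmarks over the bridged and IOV data paths but do not list the benchmark code. A minimal ping-pong latency micro-benchmark of the sort commonly used for such measurements could look like the following sketch (standard MPI, two ranks; the buffer size and iteration count are arbitrary choices, not the authors' settings). Running it with two ranks placed on different VMs, e.g. mpirun -np 2 across two hosts, exercises whichever data path the cluster is configured with.

/* Minimal MPI ping-pong latency micro-benchmark: rank 0 and rank 1
 * bounce a small message back and forth and report the average one-way
 * latency. Illustrative of the kind of micro-benchmark used to compare
 * bridged vs. directly attached (IOV) data paths; not the authors'
 * actual benchmark code. */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000
#define MSG_BYTES 64

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char buf[MSG_BYTES] = {0};
    if (size >= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double elapsed = MPI_Wtime() - start;
            /* Each iteration is one round trip: divide by 2 * ITERS. */
            printf("avg one-way latency: %.2f us\n",
                   elapsed / (2.0 * ITERS) * 1e6);
        }
    }

    MPI_Finalize();
    return 0;
}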